Deriving the governing equations of complex physical systems from first principles can be very challenging when the system contains unknown terms and hidden physical mechanisms. In this work, we employ a deep learning architecture to learn the fluid partial differential equations (PDEs) of a plasma system from data obtained from a fully kinetic model. We demonstrate that the learned multi-moment fluid PDEs can incorporate kinetic effects such as Landau damping. Based on the learned fluid closure, data-driven multi-moment fluid modeling can well reproduce all the physical quantities derived from the fully kinetic model. The calculated damping rate of Landau damping is consistent with both the fully kinetic simulation and linear theory. Data-driven fluid modeling of PDEs for complex physical systems can be applied to improve fluid closures and reduce the computational cost of multi-scale modeling of global systems.
As an emerging secure learning paradigm for leveraging cross-agency private data, vertical federated learning (VFL) is promising for improving advertising models by enabling the joint learning of complementary user attributes privately owned by the advertiser and the publisher. However, there are two key challenges in applying it to advertising systems: a) the limited scale of labeled overlapping samples, and b) the high cost of real-time cross-agency serving. In this paper, we propose a semi-supervised split distillation framework, VFed-SSD, to alleviate these two limitations. We identify that: i) there is a large amount of unlabeled overlapping data available in advertising systems, and ii) we can keep a balance between model performance and inference cost by decomposing the federated model. Specifically, we develop a self-supervised task, Matched Pair Detection (MPD), to exploit the vertically partitioned unlabeled data, and propose the Split Knowledge Distillation (SplitKD) schema to avoid cross-agency serving. Empirical studies on three industrial datasets exhibit the effectiveness of our method, with the median AUC over all datasets improved by 0.86% and 2.6% in the local deployment mode and the federated deployment mode, respectively. Overall, our framework provides an efficient federation-enhanced solution for real-time display advertising with minimal deployment cost and significant performance lift.
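As a rough illustration of the distillation idea (not the paper's actual objective), a locally served student model can be trained against both the ground-truth labels and the softened predictions of the joint federated teacher; all function names and hyperparameters below are assumptions for illustration:

```python
import numpy as np

def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Sketch of a knowledge-distillation objective of the kind a
    SplitKD-style setup could use: the local (student) branch mimics
    the federated (teacher) model's temperature-softened predictions
    while also fitting the click labels. Illustrative only.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hard term: binary cross-entropy against ground-truth labels.
    p = sigmoid(student_logits)
    bce = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

    # Soft term: cross-entropy of the student's softened probabilities
    # against the teacher's softened probabilities.
    p_s = sigmoid(student_logits / T)
    p_t = sigmoid(teacher_logits / T)
    soft = -np.mean(p_t * np.log(p_s) + (1 - p_t) * np.log(1 - p_s))

    return alpha * bce + (1 - alpha) * soft
```

After training, only the student sub-model needs to be deployed, which is what removes the real-time cross-agency serving cost.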
With the development of traffic prediction technology, spatio-temporal prediction models have attracted increasing attention from the academic community and industry. However, most existing studies focus on reducing the model's prediction error while ignoring the error caused by the uneven spatial distribution of events within a region. In this paper, we study the region partitioning problem, namely the Optimal Grid Size Selection problem (OGSS), which aims to minimize the true error of spatio-temporal prediction models by selecting the optimal grid size. To solve OGSS, we analyze the upper bound of the true error of spatio-temporal prediction models and minimize the true error by minimizing its upper bound. Through in-depth analysis, we find that the upper bound of the true error first decreases and then increases as the number of model grids increases from 1 to the maximum allowable value. We then propose two algorithms, ternary search and an iterative method, to automatically find the optimal grid size. Finally, experiments verify that the prediction error has the same trend as its upper bound, and that the upper bound of the true error first decreases and then increases with respect to the number of model grids. Meanwhile, in one case, the order dispatching results of a state-of-the-art prediction algorithm were improved by up to 13.6% by selecting the optimal grid size, which demonstrates the effectiveness of our methods in tuning the region partition for spatio-temporal prediction models.
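The ternary-search algorithm relies on the error upper bound being unimodal in the number of grids (first decreasing, then increasing). A minimal sketch of discrete ternary search over such a unimodal cost; the cost function `f` standing in for the error upper bound is an illustrative assumption:

```python
def ternary_search_min(f, lo, hi):
    """Find the integer n in [lo, hi] minimizing f(n), assuming f is
    strictly unimodal: it first decreases, then increases, as the
    paper argues the true-error upper bound does in the grid count.
    """
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            hi = m2 - 1  # the minimum cannot lie in [m2, hi]
        else:
            lo = m1 + 1  # the minimum cannot lie in [lo, m1]
    # At most three candidates remain; check them directly.
    return min(range(lo, hi + 1), key=f)
```

Each iteration discards a third of the search range, so only O(log n) evaluations of the (possibly expensive) error bound are needed.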
Deep reinforcement learning (RL) agents are becoming increasingly proficient in a range of complex control tasks. However, the agent's behavior is usually difficult to interpret due to the introduction of black-box functions, making it difficult to gain users' trust. Although there are some interesting interpretation methods for vision-based RL, most of them cannot uncover temporal causal information, which raises questions about their reliability. To address this problem, we present a Temporal-Spatial Causal Interpretation (TSCI) model to understand the agent's long-term behavior, which is essential for sequential decision-making. The TSCI model builds on the formulation of temporal causality, which reflects the temporal causal relations between sequential observations and the decisions of the RL agent. Then a separate causal discovery network is employed to identify temporal-spatial causal features, which are constrained to satisfy temporal causality. The TSCI model is applicable to recurrent agents and can be used to discover causal features with high training efficiency. Empirical results show that the TSCI model can produce high-resolution and sharp attention masks to highlight the task-relevant temporal-spatial information that constitutes most of the evidence for how vision-based RL agents make sequential decisions. In addition, we also show that our method is able to provide valuable causal interpretations for vision-based RL agents from a temporal perspective.
Developing conversational agents that interact with patients and provide primary clinical advice has attracted increasing attention due to its huge application potential, especially in the time of the COVID-19 pandemic. However, the training of end-to-end neural dialogue systems is restricted by the insufficient quantity of medical dialogue corpora. In this work, we make the first attempt to build and release a large-scale, high-quality medical dialogue dataset related to 12 common gastrointestinal diseases, named MedDG, with more than 17K conversations collected from an online health consultation community. In each conversation of MedDG, five different categories of entities are annotated, including diseases, symptoms, attributes, tests, and medicines. To push forward future research on building expertise-aware medical dialogue systems, we propose two kinds of medical dialogue tasks based on the MedDG dataset. One is next-entity prediction and the other is doctor response generation. To acquire a clear understanding of these two medical dialogue tasks, we implement several state-of-the-art benchmarks and design two dialogue models that take the predicted entities into further consideration. Experimental results show that pre-trained language models and other baselines struggle on both tasks, with poor performance on our dataset, and that response quality can be enhanced with the help of auxiliary entity information. From human evaluation, the simple retrieval model outperforms several state-of-the-art generative models, indicating that there still remains a large room for improvement in generating meaningful responses.
Model-free deep reinforcement learning (RL) algorithms have been widely used in a range of complex control tasks. However, slow convergence and sample inefficiency remain challenging problems in RL, especially when dealing with continuous and high-dimensional state spaces. To tackle this problem, we propose a general acceleration method for model-free, off-policy deep RL algorithms by drawing on the idea of regularized Anderson acceleration (RAA), which is an effective approach to accelerating the iterative solution of fixed-point problems with perturbation. Specifically, we first explain how policy iteration can be applied directly with Anderson acceleration. Then we extend RAA by introducing a regularization term to control the impact of the perturbation induced by function approximation errors. We further propose two strategies, namely progressive update and adaptive restart, to enhance the performance. The effectiveness of our method is evaluated on a variety of benchmark tasks, including Atari 2600 and MuJoCo. Experimental results show that our approach substantially improves both the learning speed and the final performance of state-of-the-art deep RL algorithms.
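Anderson acceleration combines the last few fixed-point iterates with weights chosen to minimize the combined residual, and the regularization term keeps the underlying least-squares problem well-posed when the residuals are perturbed. A minimal numerical sketch on a toy fixed-point problem; the function names and hyperparameters are illustrative, not the paper's deep RL implementation:

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, iters=50, reg=1e-8):
    """Regularized Anderson acceleration for the fixed-point problem
    x = g(x). Keeps the last m iterates and combines their images
    under g with weights (summing to 1) that minimize the norm of the
    combined residual; `reg` is a Tikhonov-style term, in the spirit
    of the paper's regularized variant, that keeps the normal
    equations well-conditioned.
    """
    xs = [x0]        # past iterates
    fs = [g(x0)]     # g evaluated at past iterates
    for _ in range(iters):
        k = len(xs)
        mk = min(m, k)
        # Residual matrix: one column per retained iterate.
        R = np.stack([fs[i] - xs[i] for i in range(k - mk, k)], axis=1)
        # Solve (R^T R + reg*I) a = 1, then normalize so sum(a) = 1,
        # the standard unconstrained reformulation of the weight problem.
        G = R.T @ R + reg * np.eye(mk)
        a = np.linalg.solve(G, np.ones(mk))
        a /= a.sum()
        x_new = sum(a[j] * fs[k - mk + j] for j in range(mk))
        xs.append(x_new)
        fs.append(g(x_new))
    return xs[-1]
```

On affine maps the combined residual can be driven to zero in a few steps, which is why the scheme converges much faster than plain fixed-point iteration.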
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separated approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features and things/stuff features, respectively. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme to further boost part segmentation quality, using masked cross attention for the part-whole interaction. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
Rankings are widely collected in various real-life scenarios, leading to the leakage of personal information such as users' preferences on videos or news. To protect rankings, existing works mainly develop privacy protection on a single ranking within a set of rankings or on pairwise comparisons of a ranking under $\epsilon$-differential privacy. This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting ranks. We establish the connection between the Mallows model (Mallows, 1957) and the proposed $\epsilon$-ranking differential privacy. This allows us to develop a multistage ranking algorithm to generate synthetic rankings while satisfying the developed $\epsilon$-ranking differential privacy. Theoretical results regarding the utility of synthetic rankings in downstream tasks, including the inference attack and the personalized ranking task, are established. For the inference attack, we quantify how $\epsilon$ affects the estimation of the true ranking based on synthetic rankings. For the personalized ranking task, we consider varying privacy preferences among users and quantify how their privacy preferences affect the consistency in estimating the optimal ranking function. Extensive numerical experiments are carried out to verify the theoretical results and demonstrate the effectiveness of the proposed synthetic ranking algorithm.
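The Mallows model admits an exact multistage sampler, the repeated insertion method, which is the natural building block for generating synthetic rankings. A minimal sketch under the Kendall-distance parameterization; this is a generic Mallows sampler for illustration, not necessarily the paper's exact algorithm:

```python
import random

def sample_mallows(center, phi, rng=random):
    """Draw one ranking from a Mallows model via repeated insertion.

    `center` is the central ranking (a list of items) and
    phi = exp(-theta) in [0, 1] controls dispersion: phi -> 0
    concentrates all mass on `center`, phi = 1 gives the uniform
    distribution over permutations. Moving an item j places toward
    the front creates j extra inversions, hence the phi^(i - j)
    insertion weights.
    """
    out = []
    for i, item in enumerate(center):
        # Insert the i-th item of `center` at position j in 0..i with
        # probability proportional to phi^(i - j); j = i preserves the
        # center order and carries the largest weight when phi < 1.
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        j = 0
        while r > weights[j]:
            r -= weights[j]
            j += 1
        out.insert(j, item)
    return out
```

Under the paper's privacy notion, the dispersion parameter would be tied to the privacy budget $\epsilon$: smaller $\epsilon$ (stronger privacy) corresponds to samples that are less concentrated around the true ranking.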
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.